Accelerated Bregman proximal gradient methods for relatively smooth convex optimization

Authors

Abstract

We consider the problem of minimizing the sum of two convex functions: one is differentiable and relatively smooth with respect to a reference convex function, and the other can be nondifferentiable but simple to optimize. We investigate a triangle scaling property of the Bregman distance generated by the reference convex function and present accelerated Bregman proximal gradient (ABPG) methods that attain an $$O(k^{-\gamma })$$ convergence rate, where $$\gamma \in (0,2]$$ is the triangle scaling exponent (TSE) of the Bregman distance. For the Euclidean distance, we have $$\gamma =2$$ and recover the convergence rate of Nesterov's accelerated gradient methods. For non-Euclidean distances, the TSE can be much smaller (say $$\gamma \le 1$$), but we show that a relaxed definition of intrinsic TSE is always equal to 2. We exploit the intrinsic TSE to develop adaptive ABPG methods that converge much faster in practice. Although theoretical guarantees on the fast convergence rates seem out of reach in general, our methods obtain empirical $$O(k^{-2})$$ rates in numerical experiments on several applications and provide posterior numerical certificates for the fast rates.
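To make the setup concrete, the following is a minimal sketch of the basic (non-accelerated) Bregman proximal gradient step on the probability simplex, using the entropy reference function whose Bregman distance is the Kullback–Leibler divergence. It is not the authors' ABPG implementation; the toy problem, the names grad_f and step_size, and the choice of simplex/entropy geometry are illustrative assumptions. The accelerated variants described in the abstract add extrapolation whose step-size rules depend on the triangle scaling exponent.

import numpy as np

def bregman_prox_grad_step(x, grad_f, step_size):
    """One basic Bregman proximal gradient step on the probability simplex.

    Reference function: negative entropy h(x) = sum_i x_i*log(x_i), whose
    Bregman distance D_h(u, x) is the KL divergence.  Taking the nonsmooth
    term to be the indicator of the simplex, the update
        x_next = argmin_u { <grad_f(x), u> + (1/step_size) * D_h(u, x) }
    has the closed form below (exponentiated gradient).
    """
    g = grad_f(x)
    # Subtracting g.min() rescales every entry by a common factor, which
    # cancels after normalization; it only improves numerical stability.
    z = x * np.exp(-step_size * (g - g.min()))
    return z / z.sum()

# Illustrative usage on a toy quadratic over the simplex (hypothetical problem).
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
Q = A.T @ A
f = lambda x: 0.5 * x @ Q @ x
grad_f = lambda x: Q @ x

x = np.full(5, 0.2)          # start at the center of the simplex
for k in range(200):
    x = bregman_prox_grad_step(x, grad_f, step_size=0.05)
print("f(x) after 200 steps:", f(x))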


Similar articles

Relatively-Smooth Convex Optimization by First-Order Methods, and Applications

The usual approach to developing and analyzing first-order methods for smooth convex optimization assumes that the gradient of the objective function is uniformly smooth with some Lipschitz constant L. However, in many settings the differentiable convex function f(·) is not uniformly smooth – for example in D-optimal design where f(x) := −ln det(HXH^T), or even the univariate setting with f(x) ...
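For reference, relative smoothness is usually defined through the Bregman distance of a reference function h: with $$D_h(x,y) = h(x) - h(y) - \langle \nabla h(y), x - y\rangle$$, a differentiable convex function f is L-smooth relative to h if $$f(x) \le f(y) + \langle \nabla f(y), x - y\rangle + L\, D_h(x,y)$$ for all x, y in the domain. Choosing $$h(x) = \tfrac{1}{2}\Vert x\Vert _2^2$$ recovers the usual condition that the gradient of f is Lipschitz with constant L.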


Convergence Rates of Inexact Proximal-Gradient Methods for Convex Optimization

We consider the problem of optimizing the sum of a smooth convex function and a non-smooth convex function using proximal-gradient methods, where an error is present in the calculation of the gradient of the smooth term or in the proximity operator with respect to the non-smooth term. We show that both the basic proximal-gradient method and the accelerated proximal-gradient method achieve the s...
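As a point of reference, here is a minimal sketch of one exact proximal gradient step for the composite model above, assuming the smooth term has an L-Lipschitz gradient and the non-smooth term is lam*||x||_1, whose proximity operator is soft-thresholding; the names grad_f, L, and lam are illustrative. The inexact setting analyzed above replaces grad_f(x) or the proximity operator with an approximation.

import numpy as np

def prox_grad_step(x, grad_f, L, lam):
    # Forward (gradient) step on the smooth term with step size 1/L ...
    y = x - grad_f(x) / L
    # ... followed by the proximity operator of (lam/L)*||.||_1, which has
    # the closed-form soft-thresholding solution.
    return np.sign(y) * np.maximum(np.abs(y) - lam / L, 0.0)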


Inexact Proximal Gradient Methods for Non-convex and Non-smooth Optimization

Non-convex and non-smooth optimization plays an important role in machine learning. The proximal gradient method is one of the most important methods for solving non-convex and non-smooth problems, where a proximal operator needs to be solved exactly at each step. However, in many problems the proximal operator does not have an analytic solution, or an exact solution is expensive to obtain. ...


Accelerated gradient sliding for structured convex optimization

Our main goal in this paper is to show that one can skip gradient computations for gradient descent type methods applied to certain structured convex programming (CP) problems. To this end, we first present an accelerated gradient sliding (AGS) method for minimizing the summation of two smooth convex functions with different Lipschitz constants. We show that the AGS method can skip the gradient...


Accelerated Proximal Gradient Methods for Nonconvex Programming

Nonconvex and nonsmooth problems have recently received considerable attention in signal/image processing, statistics and machine learning. However, solving the nonconvex and nonsmooth optimization problems remains a big challenge. Accelerated proximal gradient (APG) is an excellent method for convex programming. However, it is still unknown whether the usual APG can ensure the convergence to a...



Journal

Journal title: Computational Optimization and Applications

Year: 2021

ISSN: 0926-6003, 1573-2894

DOI: https://doi.org/10.1007/s10589-021-00273-8